
    On the Machine Learning Techniques for Side-channel Analysis

    Side-channel attacks represent one of the most powerful categories of attacks on cryptographic devices, with profiled attacks occupying a prominent place as the most powerful among them. For instance, the template attack is a well-known real-world attack that is also the most powerful from an information-theoretic perspective. On the other hand, machine learning techniques have proven their worth in numerous applications, side-channel analysis definitely being one of them, but they come at a price. Selecting the appropriate algorithm as well as its parameters can sometimes be a difficult and time-consuming task. Nevertheless, the results obtained so far justify such an effort. However, a large part of those results relies on simplifications of the data relation on the one hand and extremely powerful machine learning techniques on the other. In this paper, we concentrate first on the tuning part, which we show to be of extreme importance. Furthermore, since tuning is a time-demanding task, we discuss how to use hyperheuristics to obtain good results in a relatively short amount of time. Next, we provide an extensive comparison of various machine learning techniques, spanning from extremely simple ones (even without any parameters to tune) up to methods where previous experience is a must if one wants to obtain competitive results. To support our claims, we give extensive experimental results and discuss the necessary conditions to conduct a proper machine learning analysis. Besides the machine learning algorithms' results, we give results obtained with the template attack. Finally, we investigate the influence of feature (in)dependence in datasets with varying amounts of noise, as well as the influence of feature noise and classification noise. To strengthen our findings, we also discuss provable machine learning algorithms, i.e., PAC learning algorithms.
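
    As an illustration of the tuning step discussed above, the sketch below grid-searches the hyperparameters of an SVM classifier on synthetic side-channel-like traces. The data generation, classifier choice, and parameter grid are assumptions made for demonstration, not the paper's experimental setup.

```python
# Illustrative sketch, not the paper's setup: tune an SVM on synthetic
# side-channel-like traces to show how much the hyperparameters matter.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_traces, n_samples, n_classes = 2000, 50, 9          # e.g. Hamming-weight classes
labels = rng.integers(0, n_classes, size=n_traces)

# Synthetic "traces": class-dependent leakage at a few points plus Gaussian noise.
traces = rng.normal(0.0, 1.0, size=(n_traces, n_samples))
traces[:, 10] += 0.5 * labels
traces[:, 25] += 0.3 * labels

X_tr, X_te, y_tr, y_te = train_test_split(traces, labels, random_state=0)

# A deliberately small grid; a hyperheuristic would explore such a space adaptively.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3, n_jobs=-1)
search.fit(X_tr, y_tr)

print("best parameters:", search.best_params_)
print("test accuracy:  ", search.best_estimator_.score(X_te, y_te))
```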

    Watermarking Graph Neural Networks based on Backdoor Attacks

    Graph Neural Networks (GNNs) have achieved promising performance in various real-world applications. Building a powerful GNN model is not a trivial task, as it requires a large amount of training data, powerful computing resources, and human expertise in fine-tuning the model. What is more, with the development of adversarial attacks, e.g., model stealing attacks, GNNs raise challenges for model authentication. To avoid copyright infringement on GNNs, it is necessary to verify the ownership of GNN models. In this paper, we present a watermarking framework for GNNs for both graph and node classification tasks. We 1) design two strategies to generate watermarked data for the graph classification task and one for the node classification task, 2) embed the watermark into the host model through training to obtain the watermarked GNN model, and 3) verify the ownership of the suspicious model in a black-box setting. The experiments show that our framework can verify the ownership of GNN models with a very high probability (around 95%) for both tasks. Finally, we experimentally show that our watermarking approach is robust against two model modifications and an input reformation defense against backdoor attacks. Comment: 13 pages, 9 figures.
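
    A minimal sketch of the black-box verification step described above: query the suspicious model on watermarked inputs and test whether it predicts the watermark target labels often enough. The function names, the stand-in model, and the 0.9 threshold are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of black-box ownership verification: query the suspect model
# on watermarked inputs and check how often it returns the watermark labels.
# The threshold and the stand-in "model" below are illustrative assumptions.
import numpy as np

def verify_ownership(predict_fn, watermark_inputs, watermark_labels, threshold=0.9):
    """predict_fn is a black-box callable mapping one input to a class label."""
    preds = np.array([predict_fn(x) for x in watermark_inputs])
    watermark_accuracy = float(np.mean(preds == np.asarray(watermark_labels)))
    return watermark_accuracy, watermark_accuracy >= threshold

# Toy usage: placeholder inputs and a stand-in model that always answers class 1.
inputs = [np.zeros(4) for _ in range(20)]
labels = [1] * 20                                    # the watermark target class
acc, owned = verify_ownership(lambda x: 1, inputs, labels)
print(f"watermark accuracy = {acc:.2f}, ownership claimed: {owned}")
```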

    CSI Neural Network: Using Side-channels to Recover Your Artificial Neural Network Information

    Machine learning has become mainstream across industries. Numerous examples have proven its validity for security applications. In this work, we investigate how to reverse engineer a neural network by using only power side-channel information. To this end, we consider a multilayer perceptron as the machine learning architecture of choice and assume a non-invasive, eavesdropping attacker capable of measuring only passive side-channel leakages like power consumption, electromagnetic radiation, and reaction time. We conduct all experiments on real data and common neural network architectures in order to properly assess the applicability and extendability of those attacks. Practical results are shown on an ARM Cortex-M3 microcontroller. Our experiments show that the side-channel attacker is capable of obtaining the following information: the activation functions used in the architecture, the number of layers and neurons in the layers, the number of output classes, and the weights in the neural network. Thus, the attacker can effectively reverse engineer the network using side-channel information. Next, we show that once the attacker has knowledge of the neural network architecture, he/she can also recover the inputs to the network with only a single-shot measurement. Finally, we discuss several mitigations one could use to thwart such attacks. Comment: 15 pages, 16 figures.
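
    One of the leakages mentioned above, reaction time, can already separate activation functions in a simple software setting. The sketch below is a hypothetical host-side timing experiment with NumPy implementations; it is not the paper's ARM Cortex-M3 measurement setup.

```python
# Hypothetical timing experiment (host-side NumPy, not a Cortex-M3 measurement):
# different activation functions show different execution-time profiles.
import time
import numpy as np

def relu(x):    return np.maximum(x, 0.0)
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))
def tanh(x):    return np.tanh(x)

def mean_runtime(fn, x, repetitions=200):
    start = time.perf_counter()
    for _ in range(repetitions):
        fn(x)
    return (time.perf_counter() - start) / repetitions

x = np.random.default_rng(0).normal(size=100_000)
for fn in (relu, sigmoid, tanh):
    print(f"{fn.__name__:8s} mean runtime: {mean_runtime(fn, x) * 1e6:8.1f} us")
```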

    Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks

    Deep learning models achieve excellent performance in numerous machine learning tasks. Yet, they suffer from security-related issues such as adversarial examples and poisoning (backdoor) attacks. A deep learning model may be poisoned by training with backdoored data or by modifying its inner network parameters. Then, a backdoored model performs as expected when receiving clean inputs, but it misclassifies when receiving a backdoored input stamped with a pre-designed pattern called a "trigger". Unfortunately, it is difficult to distinguish between clean and backdoored models without prior knowledge of the trigger. This paper proposes a backdoor detection method that utilizes a special type of adversarial attack, the universal adversarial perturbation (UAP), and its similarities with a backdoor trigger. We observe an intuitive phenomenon: UAPs generated from backdoored models need fewer perturbations to mislead the model than UAPs from clean models. UAPs of backdoored models tend to exploit the shortcut from all classes to the target class built by the backdoor trigger. We propose a novel method called Universal Soldier for Backdoor detection (USB), which reverse engineers potential backdoor triggers via UAPs. Experiments on 345 models trained on several datasets show that USB effectively detects the injected backdoor and provides results comparable to or better than state-of-the-art methods.
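
    The sketch below illustrates the underlying intuition in a very simplified form: craft a universal perturbation that pushes a whole batch toward one target class by gradient steps and use its norm as a detection score, since a backdoored model's shortcut tends to yield a smaller norm. The toy model, optimizer, and step counts are assumptions; this is not the USB algorithm itself.

```python
# Simplified sketch of the intuition, not the USB algorithm: craft a universal
# perturbation that pushes a whole batch toward one target class and use its
# norm as a score; a backdoored model's shortcut tends to yield a smaller norm.
import torch

def universal_perturbation_norm(model, inputs, target_class, steps=50, lr=0.01):
    """inputs: (N, ...) tensor; model returns logits. Returns the UAP's L2 norm."""
    delta = torch.zeros_like(inputs[0], requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.full((inputs.shape[0],), target_class, dtype=torch.long)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs + delta), target)
        loss.backward()
        optimizer.step()
    return delta.detach().norm().item()

# Toy usage with an untrained stand-in model (illustration only).
toy_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.randn(32, 1, 28, 28)
print("UAP norm:", universal_perturbation_norm(toy_model, x, target_class=0))
```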

    Rethinking the Trigger-injecting Position in Graph Backdoor Attack

    Backdoor attacks have been demonstrated as a security threat to machine learning models. Traditional backdoor attacks intend to inject backdoor functionality into the model such that the backdoored model performs abnormally on inputs with predefined backdoor triggers while retaining state-of-the-art performance on clean inputs. While there are already some works on backdoor attacks on Graph Neural Networks (GNNs), the backdoor trigger in the graph domain is mostly injected into random positions of the sample. There is no work analyzing and explaining backdoor attack performance when triggers are injected into the most important or the least important area of the sample, which we refer to as the trigger-injecting strategies MIAS and LIAS, respectively. Our results show that, generally, LIAS performs better, and the differences between the LIAS and MIAS performance can be significant. Furthermore, we explain these two strategies' similar (better) attack performance through explanation techniques, which leads to a further understanding of backdoor attacks in GNNs.
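
    As a toy illustration of the two strategies, trigger positions can be chosen from an importance ranking, as sketched below. The importance scores here are made up, whereas the paper derives them from explanation methods applied to the GNN and the sample.

```python
# Toy illustration of the two trigger-injecting strategies; the importance
# scores are made up here, whereas the paper obtains them from explanations.
import numpy as np

def select_trigger_positions(importance, k, strategy="LIAS"):
    """Return indices of the k most (MIAS) or least (LIAS) important positions."""
    order = np.argsort(importance)                   # ascending importance
    return order[-k:] if strategy == "MIAS" else order[:k]

importance = np.array([0.02, 0.31, 0.07, 0.55, 0.11, 0.01])
print("MIAS positions:", select_trigger_positions(importance, 2, "MIAS"))
print("LIAS positions:", select_trigger_positions(importance, 2, "LIAS"))
```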

    Bent functions in the partial spread class generated by linear recurring sequences

    We present a construction of partial spread bent functions using subspaces generated by linear recurring sequences (LRS). We first show that the kernels of the linear mappings defined by two LRS have a trivial intersection if and only if their feedback polynomials are relatively prime. Then, we characterize the appropriate parameters for a family of pairwise coprime polynomials to generate a partial spread required for the support of a bent function, showing that such families exist if and only if the degrees of the underlying polynomials are either 1 or 2. We then count the resulting sets of polynomials and prove that, for degree 1, our LRS construction coincides with the Desarguesian partial spread. Finally, we perform a computer search of all PS− and PS+ bent functions of n=8 variables generated by our construction and compute their 2-ranks. The results show that many of these functions defined by polynomials of degree d=2 are not EA-equivalent to any Maiorana–McFarland or Desarguesian partial spread function.
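
    The coprimality condition on the feedback polynomials can be checked directly with a polynomial GCD over GF(2). The sketch below encodes binary polynomials as Python integers; the example polynomials are arbitrary choices made only to illustrate the criterion.

```python
# Illustrative check of the coprimality criterion: binary polynomials are
# encoded as Python integers (bit i is the coefficient of x^i).
def gf2_poly_mod(a, b):
    """Remainder of a divided by b over GF(2)."""
    deg_b = b.bit_length() - 1
    while a and a.bit_length() - 1 >= deg_b:
        a ^= b << (a.bit_length() - 1 - deg_b)
    return a

def gf2_poly_gcd(a, b):
    while b:
        a, b = b, gf2_poly_mod(a, b)
    return a

# x^2 + x + 1 (0b111) and x^2 + 1 = (x + 1)^2 (0b101) are coprime, so the
# subspaces generated by LRS with these feedback polynomials intersect trivially.
p, q = 0b111, 0b101
print("gcd =", bin(gf2_poly_gcd(p, q)), "-> coprime:", gf2_poly_gcd(p, q) == 1)
```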

    Lightweight but Not Easy: Side-channel Analysis of the Ascon Authenticated Cipher on a 32-bit Microcontroller

    Ascon is a recently standardized suite of symmetric cryptography for authenticated encryption and hashing algorithms designed to be lightweight. The Ascon scheme has been studied since it was introduced in 2015 for the CAESAR competition, and many efforts have been made to transform this hardware-oriented scheme to work with any embedded device architecture. Ascon is designed with side-channel resistance in mind and can also be protected with countermeasures against side-channel analysis. Up to now, side-channel analysis efforts have mainly been directed at hardware implementations, with only a few studies published on the real-world side-channel security of software implementations. In this paper, we give a comprehensive view of the side-channel security of Ascon implemented on a 32-bit microcontroller, for both the reference and a protected implementation. We show different potential leakage functions that can lead to real-world leakages and demonstrate the most potent attacks that can be obtained with the corresponding leakage functions. We present our results using correlation power analysis (CPA) and deep learning-based side-channel analysis and provide a practical estimation of the effort needed for an attacker to recover the complete key used for authenticated encryption. Our results show that the reference implementation is not side-channel secure, since an attacker can recover the full key with 8,000 traces using CPA and around 1,000 traces with deep learning analysis. While second-order CPA cannot recover any part of the secret, deep learning side-channel analysis can recover partial keys with 800 traces on the protected implementation. Unfortunately, the model used for multi-task key recovery lacks the generalization to correctly recover all partial keys for the full-key attack.
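
    For readers unfamiliar with CPA, the sketch below runs a generic first-order correlation attack on simulated Hamming-weight leakage of a single key byte. The leakage model and noise level are assumptions for illustration; the code is not specific to Ascon or to the paper's traces.

```python
# Generic first-order CPA sketch on simulated Hamming-weight leakage of a single
# key byte; the leakage model and noise level are assumptions, not Ascon-specific.
import numpy as np

rng = np.random.default_rng(1)
hw = np.array([bin(v).count("1") for v in range(256)])   # Hamming-weight table

true_key = 0x3C
n_traces = 2000
plaintexts = rng.integers(0, 256, size=n_traces)
# Simulated power samples: HW of the key-dependent intermediate plus noise.
traces = hw[plaintexts ^ true_key] + rng.normal(0.0, 2.0, size=n_traces)

# Hypothetical leakage for every key guess, correlated against the measurements.
guesses = np.arange(256)
hypotheses = hw[plaintexts[None, :] ^ guesses[:, None]]  # shape (256, n_traces)
corr = np.array([abs(np.corrcoef(h, traces)[0, 1]) for h in hypotheses])
print(f"recovered key byte: {int(np.argmax(corr)):#04x}")
```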

    On the exponents of APN power functions and Sidon sets, sum-free sets, and Dickson polynomials

    We derive necessary conditions, related to the notions of Sidon sets and sum-free sets in additive combinatorics, on those exponents $d \in \mathbb{Z}/(2^n-1)\mathbb{Z}$ which are such that $F(x)=x^d$ is an APN function over $\mathbb{F}_{2^n}$ (which is an important cryptographic property). We study to which extent these new conditions may speed up the search for new APN exponents $d$. We also show a new connection between APN exponents and Dickson polynomials: $F(x)=x^d$ is APN if and only if the reciprocal polynomial of the Dickson polynomial of index $d$ is an injective function from $\{y \in \mathbb{F}_{2^n}^* \,;\, tr_n(y)=0\}$ to $\mathbb{F}_{2^n} \setminus \{1\}$. This also leads to a new and simple connection between Reversed Dickson polynomials and reciprocals of Dickson polynomials in characteristic 2 (which generalizes to every characteristic thanks to a small modification): the squared Reversed Dickson polynomial of some index and the reciprocal of the Dickson polynomial of the same index are equal.
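
    The APN property itself can be verified by brute force for small $n$: $F$ is APN over $\mathbb{F}_{2^n}$ if and only if $F(x+a)+F(x)=b$ has at most two solutions for every $a \neq 0$ and every $b$. The sketch below does this for $F(x)=x^d$ over GF(2^6), with the field arithmetic written out by hand; the field size and the irreducible polynomial are choices made purely for illustration.

```python
# Brute-force APN check for F(x) = x^d over GF(2^6); the field size and the
# irreducible polynomial x^6 + x + 1 are choices made purely for illustration.
def gf_mul(a, b, n=6, modulus=0b1000011):
    """Multiply two elements of GF(2^n), represented as integers."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> n:
            a ^= modulus
        b >>= 1
    return r

def gf_pow(x, d, n=6):
    r = 1
    for _ in range(d):
        r = gf_mul(r, x, n)
    return r

def is_apn(d, n=6):
    """F is APN iff F(x+a)+F(x)=b has at most two solutions for every a != 0."""
    F = [gf_pow(x, d, n) for x in range(1 << n)]
    for a in range(1, 1 << n):
        counts = {}
        for x in range(1 << n):
            b = F[x ^ a] ^ F[x]
            counts[b] = counts.get(b, 0) + 1
            if counts[b] > 2:                        # solutions come in pairs {x, x + a}
                return False
    return True

print("d = 3 APN over GF(2^6):", is_apn(3))          # Gold exponent 2^1 + 1
print("d = 5 APN over GF(2^6):", is_apn(5))
```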